15 research outputs found

    Digital forensic analysis methodology for private browsing: Firefox and Chrome on Linux as a case study

    The web browser has become one of the basic tools of everyday life, a tool that is increasingly used to manage personal information. This has led browsers to introduce new privacy options, including a private mode. In this paper, a methodology to explore the effectiveness of the private mode included in most browsers is proposed. A browsing session was designed and conducted in Mozilla Firefox and Google Chrome running on four different Linux environments. After analyzing the information written to disk and the information available in memory, it can be observed that Firefox and Chrome did not store any browsing-related information on the hard disk. However, memory analysis reveals that a large amount of information could be retrieved in some of the environments tested. For example, when the browsers were executed in a VMware virtual machine, it was possible to retrieve most of the actions performed, from the keywords entered in a search field to the username and password used to log in to a website, even after restarting the computer. In contrast, when Firefox was run on a slightly hardened non-virtualized Linux, it was not possible to retrieve any browsing-related artifacts after the browser was closed.
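
    As a rough illustration of the memory-analysis step described above, the sketch below streams a raw memory image and searches it for byte patterns associated with a browsing session. The dump file name and the patterns are placeholders; a real investigation would typically rely on a dedicated memory-forensics framework such as Volatility.

        # Minimal sketch: scan a raw memory dump for browsing artifacts.
        # "memdump.raw" and the patterns are hypothetical examples.
        import re

        ARTIFACT_PATTERNS = [rb"searchterm", rb"username=\w+", rb"password=\w+"]

        def scan_dump(path, chunk_size=16 * 1024 * 1024, overlap=256):
            """Stream the dump in chunks, keeping a small overlap so that
            matches straddling a chunk boundary are not missed."""
            hits = []
            with open(path, "rb") as f:
                offset = 0          # file position of the current chunk
                tail = b""
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    data = tail + chunk
                    base = offset - len(tail)   # file position of `data`
                    for pattern in ARTIFACT_PATTERNS:
                        for m in re.finditer(pattern, data):
                            hits.append((base + m.start(), m.group()))
                    tail = data[-overlap:]
                    offset += len(chunk)
            return hits

        for position, match in scan_dump("memdump.raw"):
            print(f"0x{position:012x}  {match.decode(errors='replace')}")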

    Assessment, Design and Implementation of a Private Cloud for MapReduce Applications

    Scientific computation and data-intensive analyses are ever more frequent. On the one hand, the MapReduce programming model has gained a lot of attention for its applicability in large parallel data analyses and Big Data applications. On the other hand, Cloud computing seems to be increasingly attractive in solving computing problems that demand a lot of resources. This paper explores the potential symbiosis between MapReduce and Cloud Computing, in order to create a robust and scalable environment to execute MapReduce workflows regardless of the underlying infrastructure. The main goal of this work is to provide an easy-to-install interface, so that non-expert scientists can deploy a suitable testbed for their MapReduce experiments on the local resources of their institution. Test cases were run in order to evaluate the time required for the whole execution process on a real cluster.
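
    For readers unfamiliar with the programming model, the following self-contained word count shows the map, shuffle, and reduce phases in plain Python, with no cluster required; the function names are illustrative and are not the interface described in the paper.

        # Word count in the MapReduce style: map emits (word, 1) pairs,
        # shuffle groups pairs by key, reduce sums each group.
        from collections import defaultdict

        def map_phase(document):
            for word in document.split():
                yield word.lower(), 1

        def shuffle(pairs):
            groups = defaultdict(list)
            for key, value in pairs:
                groups[key].append(value)
            return groups

        def reduce_phase(groups):
            return {key: sum(values) for key, values in groups.items()}

        docs = ["big data needs big clusters", "cloud clusters scale"]
        pairs = (pair for doc in docs for pair in map_phase(doc))
        print(reduce_phase(shuffle(pairs)))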

    Automatic Extraction of Road Points from Airborne LiDAR Based on Bidirectional Skewness Balancing

    Road extraction from Light Detection and Ranging (LiDAR) data has become a hot topic over recent years. Nevertheless, it is still challenging to perform this task in a fully automatic way. Experiments are often carried out over small datasets with a focus on urban areas, and it is unclear how these methods perform in less urbanized sites. Furthermore, some methods require the manual input of critical parameters, such as an intensity threshold. Aiming to address these issues, this paper proposes a method for the automatic extraction of road points suitable for different landscapes. Road points are identified using pipeline filtering based on a set of constraints defined on the intensity, curvature, local density, and area. We focus especially on the intensity constraint, as it is the key factor in distinguishing between road and ground points. The optimal intensity threshold is established automatically by an improved version of the skewness balancing algorithm. Evaluation was conducted on ten study sites with different degrees of urbanization. Road points were successfully extracted in all of them with an overall completeness of 93%, a correctness of 83%, and a quality of 78%. These results are competitive with the state of the art.

    This work has received financial support from the Consellería de Cultura, Educación e Ordenación Universitaria (accreditation 2019-2022, ED431G-2019/04, and reference competitive group 2019-2021, ED431C 2018/19) and the European Regional Development Fund (ERDF), which acknowledges the CiTIUS-Research Center in Intelligent Technologies of the University of Santiago de Compostela as a Research Center of the Galician University System. This work was also supported in part by Babcock International Group PLC (Civil UAVs Initiative Fund of Xunta de Galicia) and the Ministry of Education, Culture and Sport, Government of Spain (Grant Number TIN2016-76373-P).
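
    For intuition, the sketch below implements the classic (unidirectional) skewness balancing idea on which the paper improves: intensity values are removed from the top until the sample skewness drops to zero or below, and the highest remaining value serves as the automatic threshold. The synthetic data and all parameters are assumptions for illustration only.

        # Classic skewness balancing for automatic intensity thresholding.
        import numpy as np

        def skewness(x):
            s = x.std()
            return 0.0 if s == 0 else ((x - x.mean()) ** 3).mean() / s ** 3

        def skewness_balanced_threshold(intensities):
            values = np.sort(np.asarray(intensities, dtype=float))
            while len(values) > 2 and skewness(values) > 0:
                values = values[:-1]      # drop the current maximum
            return values[-1]             # highest intensity kept

        rng = np.random.default_rng(0)
        sample = np.concatenate([rng.normal(40, 8, 5000),     # bulk returns
                                 rng.normal(120, 15, 500)])   # high-intensity tail
        print(skewness_balanced_threshold(sample))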

    A fast and optimal pathfinder using airborne LiDAR data

    Determining the optimal path between two points in a 3D point cloud is a problem that has been addressed in many different situations: from road planning and escape-route determination to network routing and facility layout. The problem is addressed using different kinds of input information, 3D point clouds being one of the most valuable. Its main utility is to save costs, whatever the field of application. In this paper, we present a fast algorithm to determine the least-cost path in an Airborne Laser Scanning point cloud. In some situations, such as finding escape routes, computing the solution in a very short time is crucial, and few works have addressed this. State-of-the-art methods are mainly based on a digital terrain model (DTM) for calculating these routes, and such methods do not reflect the topography along the edges of the graph well. Moreover, the use of a DTM leads to a significant loss of both information and precision when calculating the characteristics of possible routes between two points. In this paper, a new method that does not require the use of a DTM and is suitable for airborne point clouds, whether they are classified or not, is proposed. The problem is modeled by defining a graph using the information given by a segmentation and a Voronoi tessellation of the point cloud. Performance tests show that the algorithm is able to compute the optimal path between two points by processing up to 678,820 points per second in a point cloud of 40,000,000 points covering 16 km².

    This work has received financial support from the Consellería de Cultura, Educación e Ordenación Universitaria (accreditation 2019-2022, ED431G-2019/04, reference competitive group 2019-2021, ED431C 2018/19) and the European Regional Development Fund (ERDF), which acknowledges the CiTIUS-Research Center in Intelligent Technologies of the University of Santiago de Compostela as a Research Center of the Galician University System. This work was also supported by the Ministry of Economy and Competitiveness, Government of Spain (Grant No. PID2019-104834GB-I00). We also acknowledge the Centro de Supercomputación de Galicia (CESGA) for the use of their computers.
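
    Once the graph is built, the least-cost path itself can be computed with a standard shortest-path algorithm. The sketch below is a plain Dijkstra search; the graph construction from the segmentation and the Voronoi tessellation is omitted, and the toy nodes and edge costs are hypothetical.

        # Dijkstra over a weighted adjacency-list graph.
        import heapq

        def dijkstra(graph, source, target):
            """graph: {node: [(neighbour, cost), ...]}. Returns (cost, path)."""
            queue = [(0.0, source, [source])]
            best = {source: 0.0}
            while queue:
                cost, node, path = heapq.heappop(queue)
                if node == target:
                    return cost, path
                if cost > best.get(node, float("inf")):
                    continue            # stale queue entry
                for neighbour, weight in graph.get(node, []):
                    new_cost = cost + weight
                    if new_cost < best.get(neighbour, float("inf")):
                        best[neighbour] = new_cost
                        heapq.heappush(queue, (new_cost, neighbour, path + [neighbour]))
            return float("inf"), []

        graph = {"A": [("B", 2.0), ("C", 5.0)], "B": [("C", 1.0)], "C": []}
        print(dijkstra(graph, "A", "C"))   # (3.0, ['A', 'B', 'C'])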

    SparkBWA: Speeding Up the Alignment of High-Throughput DNA Sequencing Data

    Next-generation sequencing (NGS) technologies have led to a huge amount of genomic data that need to be analyzed and interpreted. This has a huge impact on the DNA sequence alignment process, which nowadays requires the mapping of billions of small DNA sequences onto a reference genome. Sequence alignment thus remains the most time-consuming stage in the sequence analysis workflow. To deal with this issue, state-of-the-art aligners take advantage of parallelization strategies. However, existing solutions show limited scalability and complex implementations. In this work we introduce SparkBWA, a new tool that exploits the capabilities of a big data technology such as Spark to boost the performance of one of the most widely adopted aligners, the Burrows-Wheeler Aligner (BWA). The design of SparkBWA uses two independent software layers in such a way that no modifications to the original BWA source code are required, which ensures its compatibility with any BWA version (future or legacy). SparkBWA is evaluated in different scenarios, showing notable results in terms of performance and scalability. A comparison with other parallel BWA-based aligners validates the benefits of our approach. Finally, an intuitive and flexible API is provided to NGS professionals in order to facilitate the acceptance and adoption of the new tool. The source code of the software described in this paper is publicly available at https://github.com/citiususc/SparkBWA, under a GPL3 license.

    This work was supported by Ministerio de Economía y Competitividad (Spain) (http://www.mineco.gob.es), grants TIN2013-41129-P and TIN2014-54565-JIN. There was no additional external funding received for this study.
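
    The layered design boils down to letting Spark distribute the data while an unmodified aligner binary does the work on each partition. The PySpark sketch below shows that general pattern with RDD.pipe; the paths and the exact command line are placeholders, and the real record handling (FASTQ records spanning four lines, paired-end input, and so on) is what SparkBWA itself takes care of.

        # General pattern: stream each partition through an external aligner.
        from pyspark import SparkContext

        sc = SparkContext(appName="bwa-pipe-sketch")

        # One prepared read record per element (placeholder input path).
        reads = sc.textFile("hdfs:///data/sample.fastq", minPartitions=32)

        # BWA reads records on stdin and writes SAM lines on stdout
        # (illustrative command line, not SparkBWA's actual invocation).
        sam_lines = reads.pipe("bwa mem /ref/genome.fa -")

        sam_lines.saveAsTextFile("hdfs:///out/sample_sam")
        sc.stop()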

    Using an extended Roofline Model to understand data and thread affinities on NUMA systems

    Today’s microprocessors include multicores that feature a diverse set of compute cores and onboard memory subsystems connected by complex communication networks and protocols. The analysis of the factors that affect performance in such complex systems is far from an easy task. In any case, it is clear that increasing data locality and affinity is one of the main challenges in reducing the latency of data accesses. As the number of cores increases, the influence of this issue on the performance of parallel codes becomes ever more important. Therefore, models to characterize the performance of such systems are in broad demand. This paper shows the use of an extension of the well-known Roofline Model adapted to the main features of the memory hierarchy present in most current multicore systems. The Roofline Model was also extended to show the dynamic evolution of the execution of a given code. In order to reduce the overhead of gathering the information needed to obtain this dynamic Roofline Model, the hardware counters present in most current microprocessors are used. To illustrate its use, two simple parallel vector operations, SAXPY and SDOT, were considered. Different access strides and initial locations of the vectors in memory modules were used to show the influence of different scenarios in terms of locality and affinity. The effect of thread migration was also considered. We conclude that the proposed Roofline Model is a useful tool to understand and characterise the behaviour of the execution of parallel codes in multicore systems.

    This work has been partially supported by the Ministry of Education and Science of Spain, FEDER funds under contract TIN2010-17541, and Xunta de Galicia, EM2013/041. It has been developed in the framework of the European network HiPEAC and the Spanish network CAPAP-H.
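
    The underlying bound in any Roofline-style analysis is that attainable performance is the minimum of the peak compute rate and the product of memory bandwidth and operational intensity. The sketch below evaluates that bound for SAXPY; the peak and bandwidth figures are made-up values for illustration.

        # Classic Roofline bound: min(peak, bandwidth * operational intensity).
        PEAK_GFLOPS = 200.0     # hypothetical peak compute rate
        BANDWIDTH_GBS = 50.0    # hypothetical sustained memory bandwidth

        def roofline(operational_intensity):
            """Attainable GFLOP/s for a given intensity (flop/byte)."""
            return min(PEAK_GFLOPS, BANDWIDTH_GBS * operational_intensity)

        # Single-precision SAXPY (y = a*x + y): 2 flops per 12 bytes moved
        # (read x and y, write y), so it sits deep in the bandwidth-bound region.
        oi_saxpy = 2 / 12
        print(f"SAXPY bound: {roofline(oi_saxpy):.1f} GFLOP/s")
        print(f"Ridge point: {PEAK_GFLOPS / BANDWIDTH_GBS:.1f} flop/byte")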

    Enabling BOINC in infrastructure as a service cloud system

    Volunteer or crowd computing is becoming increasingly popular for solving complex research problems from an increasingly diverse range of areas. The majority of these projects have been built using the Berkeley Open Infrastructure for Network Computing (BOINC) platform, which provides a range of services to manage all computational aspects of a project. The BOINC system is ideal in those cases where the research community involved not only needs low-cost access to massive computing resources but where there is also significant public interest in the research being done. We discuss the way in which cloud services can help BOINC-based projects to deliver results in a fast, on-demand manner. This is difficult to achieve using volunteers alone and, at the same time, using scalable cloud resources for short on-demand projects can optimize the use of the available resources. We show how this design can be used as an efficient distributed computing platform within the cloud, and outline new approaches that could open up new possibilities in this field, using Climateprediction.net (http://www.climateprediction.net/) as a case study.
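
    One way to realize this on an infrastructure-as-a-service cloud is to boot worker instances whose start-up script installs the BOINC client and attaches it to the project. The sketch below uses boto3/EC2 as one possible backend; the image id, project URL, and account key are placeholders, and this is not the deployment tooling of any specific project.

        # Launch on-demand cloud workers that attach to a BOINC project at boot.
        import boto3

        USER_DATA = """#!/bin/bash
        apt-get update && apt-get install -y boinc-client
        boinccmd --project_attach https://example.org/project ACCOUNT_KEY_HERE
        """

        ec2 = boto3.client("ec2", region_name="eu-west-1")
        response = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder image
            InstanceType="c5.large",
            MinCount=1,
            MaxCount=10,                       # burst of on-demand workers
            UserData=USER_DATA,
        )
        print([i["InstanceId"] for i in response["Instances"]])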

    Fast Ground Filtering of Airborne LiDAR Data Based on Iterative Scan-Line Spline Interpolation

    Over the last two decades, a wide range of applications have been developed from Light Detection and Ranging (LiDAR) point clouds. Most LiDAR-derived products require the distinction between ground and non-ground points. Because of this, ground filtering is one of the most studied topics in the literature, and robust methods are nowadays available. However, these methods have been designed to work with offline data and are generally not well suited for real-time scenarios. Aiming to address this issue, this paper proposes an efficient method for ground filtering of airborne LiDAR data based on scan-line processing. In our proposal, an iterative 1-D spline interpolation is performed in each scan line sequentially. The final spline knots of a scan line are taken into account for the next scan line, so that valuable 2-D information is also considered without compromising computational efficiency. Points are labelled as ground or non-ground by analysing their residuals with respect to the final spline. When tested against synthetic ground truth, the method yields a mean kappa value of 88.59% and a mean total error of 0.50%. Experiments with real data also show satisfactory results under visual inspection. Performance tests on a workstation show that the method can process up to 1 million points per second. The original implementation was ported to a low-cost development board to demonstrate its feasibility for embedded systems, where throughput was improved by using programmable-logic hardware acceleration. Analysis shows that real-time filtering is possible on a high-end board prototype, as it can process, with low energy consumption, the number of points per second that current lightweight scanners acquire.

    This work was supported by the Ministry of Education, Culture, and Sport, Government of Spain (Grant Number TIN2016-76373-P), the Consellería de Cultura, Educación e Ordenación Universitaria (accreditation 2016–2019, ED431G/08, and ED431C 2018/2019), and the European Union (European Regional Development Fund, ERDF).
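
    The core of the residual test can be sketched for a single scan line as follows: fit a smoothing spline to candidate ground points, discard points far above the current surface, refit, and finally label by residual. This is a simplified, single-line illustration; the threshold, smoothing factor, and synthetic data are assumptions, and the paper's knot propagation between scan lines is not shown.

        # Iterative 1-D spline ground filtering on one scan line (sketch).
        import numpy as np
        from scipy.interpolate import UnivariateSpline

        def filter_scan_line(x, z, threshold=0.3, iterations=3):
            """x: along-track coordinate, z: height. Returns a ground mask."""
            keep = np.ones_like(z, dtype=bool)
            for _ in range(iterations):
                spline = UnivariateSpline(x[keep], z[keep], s=float(keep.sum()))
                residuals = z - spline(x)
                keep = residuals < threshold   # points above are non-ground candidates
            return np.abs(z - spline(x)) < threshold

        x = np.linspace(0, 50, 200)
        z = 0.05 * x + np.random.default_rng(1).normal(0, 0.05, x.size)
        z[80:90] += 3.0                        # building-like cluster of points
        print(filter_scan_line(x, z).sum(), "ground points of", x.size)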

    A Developer-Friendly “Open Lidar Visualizer and Analyser” for Point Clouds With 3D Stereoscopic View

    Light detection and ranging is a hot topic in the remote sensing field, and the development of robust point cloud processing methods is essential for the adoption of this technology. In order to understand, evaluate, and showcase these methods, it is key to visualize their outputs. Several visualization tools exist, although it is usually difficult to find one suited to a specific application. On the one hand, proprietary (closed source) projects are not flexible enough, because they cannot be modified to adapt them to particular applications. On the other hand, current open source projects lack an effortless way to create custom visualizations. For these reasons, we present Olivia, a developer-friendly open source visualization tool for point clouds. Olivia provides the backbone for any type of point cloud visualization, and it can be easily extended and tailored to meet the requirements of a specific application. It supports stereoscopic 3-D viewing, aiding both the evaluation and the presentation of processing methods. In this paper, several case studies are presented to demonstrate the usefulness of Olivia, along with its computational performance.
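
    As a point of reference for what such a tool renders, the snippet below draws a synthetic cloud as a height-coloured 3-D scatter with matplotlib. This is only a generic baseline, not Olivia's API; Olivia adds the extensibility layer and the stereoscopic view on top of this kind of output.

        # Generic point cloud rendering baseline (not Olivia's API).
        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(42)
        points = rng.uniform(0, 100, size=(5000, 3))   # synthetic x, y, z

        fig = plt.figure()
        ax = fig.add_subplot(projection="3d")
        ax.scatter(points[:, 0], points[:, 1], points[:, 2],
                   c=points[:, 2], cmap="viridis", s=1)  # colour by elevation
        ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
        plt.show()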

    Cloud Computing for Climate Modelling: Evaluation, Challenges and Benefits

    Cloud computing is a mature technology that has already shown benefits for a wide range of academic research domains that, in turn, utilize a wide range of application design models. In this paper, we discuss the use of cloud computing as a tool to improve the range of resources available for climate science, presenting the evaluation of two different climate models. Each was customized in a different way to run in public cloud computing environments (hereafter cloud computing) provided by three different public vendors: Amazon, Google and Microsoft. The adaptations and procedures necessary to run the models in these environments are described. The computational performance and cost of each model within this new type of environment are discussed, and an assessment is given in qualitative terms. Finally, we discuss how cloud computing can be used for geoscientific modelling, including issues related to the allocation of resources by funding bodies. We also discuss problems related to computing security, reliability and scientific reproducibility.